In the previous section we discussed discrete-time processes. A continuous-time stochastic process $\{X_t\}$ has the Markov property if $P(X_t = y \mid X_r, 0 \le r \le s) = P(X_t = y \mid X_s)$ for all $t \ge s$, and it is time-homogeneous if $P(X_{t+s} = y \mid X_s = x) = P(X_t = y \mid X_0 = x)$.
The Chapman-Kolmogorov equation still holds: $p_{xy}(s+t) = \sum_{z \in S} p_{xz}(s)\, p_{zy}(t)$.
A (homogeneous) Poisson process with intensity $\lambda > 0$ is a collection $\{N_t : t \ge 0\}$ of non-decreasing integer-valued random variables satisfying the following properties:
$N_0 = 0$
$0 \le t_1 \le \dots \le t_n \implies N_{t_1}, N_{t_2} - N_{t_1}, \dots, N_{t_n} - N_{t_{n-1}}$ are independent, with $N_{t_i} - N_{t_{i-1}} \sim \text{Poisson}(\lambda(t_i - t_{i-1}))$
That is, $N_t = N_t - N_0 \sim \text{Poi}(\lambda t)$, so $P(N_t = j) = \frac{(\lambda t)^j}{j!}\, e^{-\lambda t}$, and further we have:
$E[N_t] = \lambda t$ and $\mathrm{Var}[N_t] = \lambda t$, with mgf $M_{N_t}(s) = e^{\lambda t (e^{s} - 1)}$
Sums of independent Poisson variables are Poisson: if $N_{t_i}$ and $N_{t_j}$ are independent, then $N_{t_i} + N_{t_j} \sim \text{Poi}(\lambda(t_i + t_j))$, and likewise $N_{t_1} + \dots + N_{t_n} \sim \text{Poi}(\lambda(t_1 + \dots + t_n))$ when the summands are independent.
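As a quick sanity check on these properties, here is a minimal Python sketch (the intensity, time grid, and seed are illustrative choices, not values from the notes) that builds a path of $N_t$ from independent Poisson increments, exactly as in the definition, and compares the terminal count with its mean $\lambda t$.

```python
import numpy as np

# Minimal sketch: build a Poisson-process path on a time grid from independent
# Poisson increments, as in the definition above.  The intensity lam and the
# grid are illustrative choices.
rng = np.random.default_rng(0)
lam = 2.0
times = np.linspace(0.0, 10.0, 101)              # 0 = t_0 < t_1 < ... < t_n
increments = rng.poisson(lam * np.diff(times))   # N_{t_i} - N_{t_{i-1}} ~ Poi(lam * (t_i - t_{i-1}))
N = np.concatenate(([0], np.cumsum(increments))) # N_0 = 0, non-decreasing path

print("N_10 =", N[-1], " E[N_10] = lam * 10 =", lam * times[-1])
```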
Given independent Poisson processes $\{N_{i,t} : t \ge 0\}$ with intensities $\lambda_i$, $i = 1, \dots, k$, the superposition property holds: $\{\sum_{i=1}^k N_{i,t} : t \ge 0\}$ is a Poisson process with intensity $\lambda = \sum_{i=1}^k \lambda_i$.
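A short simulation sketch of the superposition property (the intensities, horizon, and replication count are illustrative): the merged count of two independent Poisson processes over $[0, T]$ has mean and variance $(\lambda_1 + \lambda_2)T$.

```python
import numpy as np

# Sketch of superposition: add the counts of two independent Poisson processes
# on [0, T] and compare mean/variance with (lam1 + lam2) * T.
rng = np.random.default_rng(1)
lam1, lam2, T, reps = 1.0, 3.0, 5.0, 100_000     # illustrative values

n1 = rng.poisson(lam1 * T, size=reps)            # counts of process 1 on [0, T]
n2 = rng.poisson(lam2 * T, size=reps)            # counts of process 2 on [0, T]
merged = n1 + n2                                 # superposed count on [0, T]

print("empirical mean:", merged.mean(), " theory:", (lam1 + lam2) * T)
print("empirical var :", merged.var(),  " theory:", (lam1 + lam2) * T)
```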
(Poisson limit) If $X_t \sim \text{Bin}(n, p_n)$ where $p_n = \frac{\lambda t}{n} + o(1/n)$, then $X_t \xrightarrow{d} \text{Poi}(\lambda t)$ as $n \to \infty$.
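A numerical illustration of this Poisson limit (the values of $\lambda$, $t$, and $n$ are arbitrary choices): the $\text{Bin}(n, \lambda t/n)$ pmf gets uniformly close to the $\text{Poi}(\lambda t)$ pmf as $n$ grows.

```python
import numpy as np
from scipy import stats

# Sketch of the Poisson limit: the Bin(n, lam*t/n) pmf approaches the
# Poi(lam*t) pmf as n grows.  lam, t, and the n values are illustrative.
lam, t = 2.0, 1.5
k = np.arange(0, 15)
for n in (10, 100, 10_000):
    binom_pmf = stats.binom.pmf(k, n, lam * t / n)
    pois_pmf = stats.poisson.pmf(k, lam * t)
    print(f"n={n:6d}  max |Bin - Poi| = {np.abs(binom_pmf - pois_pmf).max():.2e}")
```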
If $T$ is a finite stopping time for $\{N_t : t \ge 0\}$, then $\{N_{T+t} - N_T : t \ge 0\}$ is a Poisson process with intensity $\lambda$ and is independent of $\{N_s : s \le T\}$.
Def: $T_i = \inf\{t : N_{S_{i-1}+t} - N_{S_{i-1}} > 0\}$ for $i \ge 1$, where $S_{i-1} = T_1 + \dots + T_{i-1}$ is the time of the $(i-1)$th arrival (with $S_0 = 0$, defined formally below), is called the $i$th interarrival time. The $T_i$ are i.i.d. $\text{Exp}(\lambda)$.
Def: $S_n = T_1 + \dots + T_n$ is the arrival time of the $n$th event and follows $\text{Gamma}(n, \lambda)$. That is, $P(S_n > x) = \sum_{k=0}^{n-1} \frac{(\lambda x)^k}{k!} e^{-\lambda x}$.
For $S_i \le t < S_{i+1}$ we have $N_t = i$. Note that $E[S_n] = n/\lambda$ and $\mathrm{Var}[S_n] = n/\lambda^2$.
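The following sketch (with illustrative $\lambda$, $n$, $x$, and sample size) simulates $S_n$ as a sum of i.i.d. $\text{Exp}(\lambda)$ interarrival times and checks the tail series above together with the Gamma moments.

```python
import math
import numpy as np
from scipy import stats

# Sketch: S_n = T_1 + ... + T_n with T_i i.i.d. Exp(lam) is Gamma(n, lam).
# Compare the empirical tail P(S_n > x) with the series above, and check the
# moments n/lam and n/lam^2.  All numbers are illustrative.
rng = np.random.default_rng(2)
lam, n, x, reps = 1.5, 4, 3.0, 200_000

T = rng.exponential(scale=1 / lam, size=(reps, n))   # interarrival times
S_n = T.sum(axis=1)                                  # n-th arrival time

series = np.exp(-lam * x) * sum((lam * x) ** k / math.factorial(k) for k in range(n))
print("P(S_n > x) empirical:", (S_n > x).mean(), " series:", series,
      " gamma sf:", stats.gamma.sf(x, a=n, scale=1 / lam))
print("mean:", S_n.mean(), " theory:", n / lam)
print("var :", S_n.var(),  " theory:", n / lam**2)
```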
Suppose $\{N_t : t \ge 0\}$ is a Poisson process with intensity $\lambda$, each arrival is independently labeled $i$ with probability $p_i$, where $\sum_i p_i = 1$, and $\{N_{i,t} : t \ge 0\}$ denotes the process counting the arrivals labeled $i$. Then each $\{N_{i,t}\}$ is a Poisson process with intensity $\lambda p_i$, and the processes are mutually independent (the thinning property).
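A small sketch of this thinning property (the rate, label probability, horizon, and sample size are illustrative): the count of arrivals that receive a given label behaves like $\text{Poi}(\lambda p_i t)$.

```python
import numpy as np

# Sketch of thinning: each of the N_t arrivals is independently labeled with
# probability p; the labeled count should look Poi(lam * p * t).
rng = np.random.default_rng(3)
lam, p, t, reps = 4.0, 0.25, 2.0, 100_000        # illustrative values

totals = rng.poisson(lam * t, size=reps)         # N_t in each replication
labeled = rng.binomial(totals, p)                # arrivals carrying the label

print("labeled mean:", labeled.mean(), " theory:", lam * p * t)
print("labeled var :", labeled.var(),  " theory:", lam * p * t)
```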
Def: Let $\{X_t : t \ge 0\}$ be a collection of discrete random variables taking values in a state space $S$ and evolving in time $t$. $\{X_t\}$ is called a continuous-time Markov chain if:
the holding time before the chain leaves its current state $x$ is exponentially distributed with parameter $\lambda_x$
when the chain leaves state $x$, a new state $y \neq x$ is chosen according to the transition probabilities of a discrete-time Markov chain
That is, $\{X_t : t \ge 0\}$ is built from a discrete-time Markov chain governing the transitions (the jump chain) together with exponentially distributed holding times. The $\lambda_x$ are called the holding-time parameters.
For $x, y \in S$, let $\alpha(x,y)$ denote the rate of transitions from $x$ to $y$; the total rate of leaving $x$ is $\alpha(x) = \sum_{y \neq x} \alpha(x,y)$.
Recall time-homogeneity: $P(X_{t+s} = y \mid X_s = x) = P(X_t = y \mid X_0 = x)$. Now, for a small time increment $\Delta t$, we have $P(X_{t+\Delta t} = x \mid X_t = y) = \alpha(y,x)\Delta t + o(\Delta t)$, where $o(\Delta t)$ is some function of $\Delta t$ satisfying $\lim_{\Delta t \to 0} o(\Delta t)/\Delta t = 0$.
To see the rate of change of $p_x(t) = P(X_t = x)$, we have $\frac{d\, p_x(t)}{dt} = -\alpha(x)\, p_x(t) + \sum_{y \neq x} \alpha(y,x)\, p_y(t)$.
Since the state space is discrete, we can collect the rates into a matrix $G$ with entries $G_{xy} = \alpha(x,y)$ for $x \neq y$ and $G_{xx} = -\alpha(x)$. The system above becomes $p'(t) = p(t)G$, and equivalently $G = P'(0) = \lim_{t \to 0} \frac{P(t) - I}{t}$. Such a $G$ is called the infinitesimal generator of the Markov chain. Plugging in the given $p_x(0)$, or $p(0)$ in vector form, we obtain the solution $p(t) = p(0)\, e^{tG}$.
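As a concrete illustration of $p(t) = p(0)e^{tG}$, here is a sketch for a small three-state chain; the generator below is an invented example (off-diagonal entries are the rates $\alpha(x,y)$, each row sums to zero), not one taken from the notes.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: solve p(t) = p(0) exp(tG) for an illustrative 3-state generator.
# Off-diagonal entries are the rates alpha(x, y); rows sum to zero.
G = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
p0 = np.array([1.0, 0.0, 0.0])        # start in state 0

for t in (0.1, 1.0, 10.0):
    pt = p0 @ expm(t * G)             # row vector times matrix exponential
    print(f"t={t:5.1f}  p(t)={np.round(pt, 4)}  sum={pt.sum():.4f}")
```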
Let $X_0 = x$ and $T = \inf\{t : X_t \neq x\}$; then $P(T \le \Delta t) = \alpha(x)\Delta t + o(\Delta t)$. As with the interarrival times above, $T$ follows an exponential distribution, here with parameter $\alpha(x)$.
Let $p_{xy}(t) = P(X_t = y \mid X_0 = x)$; then $P(t)$ is the transition probability matrix of the Markov chain with entries $P(t)_{xy} = p_{xy}(t)$. From the previous result, $P'(t) = P(t)G$ and $P(0) = I$. Recalling the decomposition $D = Q^{-1} G Q$ (when $G$ is diagonalizable), we have $P(t) = e^{tG} = Q e^{tD} Q^{-1}$.
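When $G$ is diagonalizable, the spectral form can be checked numerically against the matrix exponential; the sketch below reuses the illustrative three-state generator from above.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: if G = Q D Q^{-1}, then P(t) = Q exp(tD) Q^{-1}.  Same illustrative
# generator as before; t = 0.7 is an arbitrary choice.
G = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
t = 0.7

eigvals, Q = np.linalg.eig(G)                    # columns of Q are eigenvectors of G
P_spectral = Q @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(Q)
P_expm = expm(t * G)

print(np.allclose(P_spectral.real, P_expm))      # True: the two formulas agree
```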
Putting this together: the jump-chain transition probabilities are $p_{xy} = \alpha(x,y)/\alpha(x)$, the holding-time parameter is $\lambda_x = \alpha(x)$, and the expected holding time is $E[T_x] = 1/\lambda_x$.
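The jump-chain probabilities and holding-time parameters can be read directly off a generator; a sketch with the same illustrative $G$:

```python
import numpy as np

# Sketch: recover holding-time parameters alpha(x) and jump-chain probabilities
# alpha(x, y) / alpha(x) from the illustrative generator used above.
G = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

alpha = -np.diag(G)                   # holding-time parameters lambda_x
jump = G / alpha[:, None]             # divide row x by alpha(x)
np.fill_diagonal(jump, 0.0)           # the jump chain never stays in place

print("holding rates:", alpha, " expected holding times:", 1 / alpha)
print("jump-chain transition matrix:\n", jump)
```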
Def: A probability distribution $\pi$ is called a stationary distribution of a continuous-time Markov chain iff $\pi P(t) = \pi$ for all $t \ge 0$, where $P(t)$ is the transition probability matrix.
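For a finite state space, $\pi P(t) = \pi$ for all $t$ is equivalent to $\pi G = 0$, so a stationary distribution can be computed from the null space of $G^{\top}$; the sketch below does this for the same illustrative generator.

```python
import numpy as np
from scipy.linalg import expm, null_space

# Sketch: for a finite chain, pi P(t) = pi for all t  <=>  pi G = 0.
# Solve pi G = 0 for the illustrative generator used above.
G = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

pi = null_space(G.T).ravel()          # left null vector of G (pi G = 0)
pi = pi / pi.sum()                    # normalize to a probability vector

print("pi:", np.round(pi, 4))
print("pi P(1) == pi ?", np.allclose(pi @ expm(G), pi))
```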
Def: $S_x = \inf\{t : X_t = x \text{ and } X_s \neq x \text{ for some } s < t\}$ is the first return time to $x$; that is, $S_x$ is the sum of the holding times in all states visited before returning to $x$. Writing $\tau_x$ for the return time in the jump chain, we have $S_x = \sum_{n=0}^{\tau_x - 1} T_n$, where $T_n$ is the holding time in the $n$th state visited.
If $P_x(S_x < \infty) = 1$, then the state $x$ is called recurrent.
If additionally $E_x[S_x] < \infty$, then $x$ is called positive recurrent;
otherwise, $x$ is called null recurrent.
In the recurrent case, $\tau_x < \infty$ and $T_n < \infty$ for all $0 \le n \le \tau_x - 1$.
If $P_x(S_x < \infty) < 1$, then $x$ is called transient.
Note: in continuous time there is no notion of periodicity.
Example (success-run chain): let $\{X_n\}$ be its jump chain. This jump chain is irreducible and positive recurrent, hence has a stationary distribution; starting from $0$, the continuous-time return time is $S_0 = \sum_{k=1}^{\tau_0 - 1} T_k$, and $n < \tau_0 \implies X_n = n$, so each state is visited at most once before the return to $0$.
Since the jump chain $\{X_n\}$ is recurrent, the continuous-time chain $\{X_t\}$ is also recurrent, but positive recurrence is not guaranteed, because the return time $S_0$ also depends on the continuous holding times.
The holding-time parameters of the success-run chain are $\lambda(k) = \frac{1}{2^k}$, $k \ge 0$; that is, the expected holding time is $E[T_n] = \frac{1}{\lambda_n} = 2^n$.
Consider an irreducible continuous-time Markov chain with a recurrent jump chain. Then a stationary distribution $\pi$ exists iff $\{X_t\}$ is positive recurrent. The stationary distribution is unique and has $\pi(x) > 0$ for all $x \in S$.
Consider an irreducible continuous-time Markov chain with a recurrent jump chain, a stationary distribution $\pi$, and transition probabilities $p_{xy}(t)$ (i.e. an ergodic chain). Then $\lim_{t \to \infty} p_{xy}(t) = \pi(y)$ for all $x, y \in S$.
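The limit can be seen numerically: for the illustrative generator used above, every row of $P(t) = e^{tG}$ is, to machine precision, equal to $\pi$ once $t$ is large.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the convergence theorem: the rows of P(t) = exp(tG) all approach
# pi as t grows.  Same illustrative generator; t = 50 is an arbitrary choice.
G = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

P_large_t = expm(50.0 * G)
print(np.round(P_large_t, 6))         # all rows agree: each row is pi
```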
Def: An M/M/1 queue is a queue $\{Q_t : t \ge 0\}$, where $Q_t$ is the number in the queue at time $t$, with interarrival times i.i.d. $\text{Exp}(\lambda)$ and service times i.i.d. $\text{Exp}(\mu)$, and generator matrix
$$G = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & 0 & \cdots \\ \mu & -(\lambda+\mu) & \lambda & 0 & 0 & \cdots \\ 0 & \mu & -(\lambda+\mu) & \lambda & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
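A direct simulation sketch of the M/M/1 chain from this generator (the rates, horizon, and seed are illustrative, with $\lambda < \mu$ so the queue is stable): in state $q \ge 1$ the holding time is $\text{Exp}(\lambda+\mu)$ and the queue moves up with probability $\lambda/(\lambda+\mu)$, while in state $0$ it waits an $\text{Exp}(\lambda)$ time and then moves up. The time-average queue length is compared with the stationary mean $\rho/(1-\rho)$, $\rho = \lambda/\mu$.

```python
import numpy as np

# Sketch: simulate the M/M/1 queue-length chain from its generator.
# lam < mu keeps the queue stable; all numbers are illustrative.
rng = np.random.default_rng(4)
lam, mu, horizon = 1.0, 2.0, 10_000.0

t, q, area = 0.0, 0, 0.0                          # area accumulates the time integral of Q_t
while t < horizon:
    rate = lam if q == 0 else lam + mu            # total rate out of the current state
    hold = rng.exponential(1 / rate)              # exponential holding time
    area += q * hold
    t += hold
    if q == 0 or rng.random() < lam / (lam + mu):
        q += 1                                    # arrival
    else:
        q -= 1                                    # service completion

rho = lam / mu
print("time-average queue length:", area / t, " theory rho/(1-rho):", rho / (1 - rho))
```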
A G/G/1 queue assumes that both the interarrival and service times are i.i.d., but not necessarily $\text{Exp}(\lambda)$ and $\text{Exp}(\mu)$.